Dense passage retrieval aims to retrieve the relevant passages to a query from a large corpus based on dense representations (i.e., vectors) of the query and the passages. Recent studies have explored improving pre-trained language models to boost dense retrieval performance. This paper proposes CoT-MAE (ConTextual Masked Auto-Encoder), a simple yet effective generative pre-training method for dense passage retrieval. CoT-MAE employs an asymmetric encoder-decoder architecture that learns to compress sentence semantics into a dense vector through self-supervised and context-supervised masked auto-encoding. Precisely, self-supervised masked auto-encoding learns to model the semantics of the tokens inside a text span, while context-supervised masked auto-encoding learns to model the semantic correlation between text spans. We conduct experiments on large-scale passage retrieval benchmarks and show considerable improvements over strong baselines, demonstrating the high efficiency of CoT-MAE.
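To make the pre-training objective concrete, here is a minimal PyTorch sketch of the contextual masked auto-encoding idea, assuming (as an illustration, not the authors' code) a deep encoder, a one-layer decoder, and a bottleneck passed through the [CLS] slot; all module names and sizes are invented for this sketch.

```python
# Hedged sketch of contextual masked auto-encoding: a deep encoder compresses
# span A into one dense vector, and a shallow decoder must reconstruct masked
# tokens of the *neighbouring* span B conditioned on that vector.
import torch
import torch.nn as nn

class ContextualMAE(nn.Module):
    def __init__(self, vocab_size=30522, dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=12)  # deep
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True),
            num_layers=1)                                               # shallow
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, span_a_ids, masked_span_b_ids):
        # Self-supervised side: encode span A; its [CLS] slot (index 0)
        # becomes the dense bottleneck vector.
        cls_vec = self.encoder(self.embed(span_a_ids))[:, :1]
        # Context-supervised side: prepend the bottleneck vector to the
        # masked neighbouring span B and predict B's masked tokens.
        dec_in = torch.cat([cls_vec, self.embed(masked_span_b_ids)], dim=1)
        logits = self.lm_head(self.decoder(dec_in)[:, 1:])
        return logits  # train with cross-entropy on the masked positions
```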
Motion polynomials (polynomials over the dual quaternions with nonzero real norm) describe rational motions. We present necessary conditions for reduced bounded motion polynomials to admit factorizations into linear factors, and we give an algorithm to compute them. The linear factors can be used to construct mechanisms, because the factorization corresponds to a decomposition of the rational motion into simple rotations or translations. Bounded motion polynomials always factor into linear factors after multiplication with a suitable real or quaternion polynomial. Our factorization criterion allows us to improve earlier algorithms for computing a suitable real or quaternion polynomial cofactor.
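As a hedged illustration of what such a factorization looks like, consider a standard example over the quaternions (a special case of dual quaternions); this specific polynomial is illustrative and not taken from the paper.

```latex
% A motion polynomial splitting into two linear factors, i.e. into a
% composition of two rotations about fixed axes (using ij = k):
\[
  P(t) \;=\; t^{2} - (\mathbf{i}+\mathbf{j})\,t + \mathbf{k}
        \;=\; (t-\mathbf{i})\,(t-\mathbf{j}).
\]
% Each linear factor t - h, with h a vector quaternion, parametrizes a
% rotation about the axis spanned by h, so the factorization realizes the
% rational motion as a chain of two revolute joints. The norm polynomial
% P(t)\,\bar{P}(t) = (t^{2}+1)^{2} is real and nonzero, as required.
```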
Clustering hyperspectral images is a fundamental but challenging task. Recent developments in hyperspectral image clustering have evolved from shallow models to deep ones and achieve promising results on many benchmark datasets. However, their poor scalability, robustness, and generalization ability, mainly caused by their offline clustering scheme, greatly limit their application to large-scale hyperspectral data. To circumvent these problems, we present a scalable deep online clustering model based on self-supervised learning, named Spectral-Spatial Contrastive Clustering (SSCC). Specifically, we exploit a symmetric twin neural network comprising a projection head whose dimensionality equals the number of clusters to conduct dual contrastive learning from a spectral-spatial augmentation pool. We define the objective function by implicitly encouraging within-cluster similarity while reducing between-cluster redundancy. The resulting approach is trained in an end-to-end fashion by batch-wise optimization, making it robust on large-scale data and yielding good generalization to unseen data. Extensive experiments on three hyperspectral image benchmarks demonstrate the effectiveness of our approach and show that we advance the state-of-the-art methods by large margins.
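One hedged reading of that objective: with a projection head of width $K$ (the cluster number), a redundancy-reduction loss on the $K \times K$ cross-correlation of the two views captures "within-cluster similarity vs. between-cluster redundancy". The sketch below is an assumption about the form of the loss, not the authors' code; the function name and the weight lam are invented.

```python
# Hedged sketch: two augmented views pass through a shared encoder +
# projection head of width K; the loss pushes the K x K cross-correlation
# toward the identity, encouraging within-cluster agreement (diagonal -> 1)
# while penalizing between-cluster redundancy (off-diagonal -> 0).
import torch

def spectral_spatial_clustering_loss(z1, z2, lam=5e-3):
    """z1, z2: (batch, K) projections of two views of the same pixels."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)   # standardize per cluster dim
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / z1.shape[0]                   # K x K cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag
```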
This paper presents FLGC, a simple yet effective fully linear graph convolutional network for semi-supervised and unsupervised learning. Instead of relying on gradient descent, FLGC computes a globally optimal closed-form solution with a decoupled procedure. We show that (1) FLGC is powerful in handling both graph-structured data and regular data, (2) training graph convolutional models with a closed-form solution improves computational efficiency without degrading performance, and (3) FLGC acts as a natural generalization of classic linear models, e.g., ridge regression and subspace clustering, to the non-Euclidean domain. Furthermore, we implement a semi-supervised FLGC and an unsupervised FLGC by introducing an initial residual strategy, enabling FLGC to aggregate long-range neighborhoods and alleviate over-smoothing. We compare our semi-supervised and unsupervised FLGCs against many state-of-the-art methods on a variety of classification and clustering benchmarks, demonstrating that the proposed FLGC models consistently outperform previous methods in accuracy, robustness, and learning efficiency. The core code of our FLGC is released at https://github.com/angrycai/flgc.
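As a hedged sketch of what a "closed-form, fully linear" graph convolution can look like (the propagation rule, symbols, and hyper-parameters below are assumptions in the spirit of the abstract, not the released FLGC code):

```python
# Hedged sketch: propagate features through a normalized adjacency with an
# initial-residual term, then solve a ridge regression in closed form
# instead of running gradient descent.
import numpy as np

def closed_form_gcn(A, X, Y, k=4, alpha=0.1, lam=1e-2):
    """A: (n,n) adjacency, X: (n,d) features, Y: (n,c) one-hot labels
    (zero rows for unlabelled nodes). Returns class scores for all nodes."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                          # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))       # symmetric normalization
    H = X
    for _ in range(k):                             # propagation with an
        H = (1 - alpha) * A_norm @ H + alpha * X   # initial-residual term
    # Closed-form ridge solution in one decoupled step:
    W = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)
    return H @ W
```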
We provide a complete classification of paradoxical closed-loop $n$-linkages, with $n \geq 6$, of mobility $n-4$ or higher that contain revolute, prismatic, or helical joints. We also explicitly write down strong necessary conditions for $nR$ linkages of mobility $n-5$. Our main new tool is a geometric relation between a linkage $L$ and another linkage $L'$ that arises by adding equations to the configuration space of $L$. We then use this relation to lift known classification results from $L'$ to $L$.
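For readers unfamiliar with "adding equations to a configuration space", a common dual-quaternion formulation from the linkage-factorization literature is sketched below; the notation is a hedged assumption, not necessarily this paper's.

```latex
% Hedged sketch: for a closed loop with revolute joints represented by dual
% quaternions h_1, ..., h_n, the configuration space is cut out by the
% closure condition
\[
  K_L \;=\; \bigl\{\,(t_1,\dots,t_n) \;:\;
      (t_1 - h_1)(t_2 - h_2)\cdots(t_n - h_n) \in \mathbb{R}\setminus\{0\}\,\bigr\},
\]
% where t_i is the tangent-half-angle parameter of joint i. Adding equations
% to K_L (for instance, freezing a joint via t_i = const) yields the
% configuration space of a derived linkage L', so classification results
% for L' constrain L.
```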
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
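As one concrete reading of the category-oriented triplet loss, the sketch below pulls each pixel feature toward its own category center and pushes it away from the nearest other center; the margin, the center estimate, and the function name are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a category-oriented triplet loss over source-domain
# category centers (batch-mean centers; absent classes fall back to zeros,
# which is acceptable at sketch level).
import torch

def category_triplet_loss(feats, labels, num_classes, margin=0.5):
    """feats: (N, D) pixel features; labels: (N,) category ids."""
    centers = torch.stack([
        feats[labels == c].mean(0) if (labels == c).any()
        else feats.new_zeros(feats.shape[1])
        for c in range(num_classes)])
    d = torch.cdist(feats, centers)                    # (N, C) distances
    pos = d.gather(1, labels.unsqueeze(1)).squeeze(1)  # distance to own center
    d_neg = d.scatter(1, labels.unsqueeze(1), float('inf'))
    neg = d_neg.min(1).values                          # nearest other center
    return torch.clamp(pos - neg + margin, min=0).mean()
```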
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSformer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
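A hedged sketch of the aggregation step such a saliency-aware metric implies: per-PEA artifact maps are weighted by visual saliency and pooled over space and time. The map inputs, weights, and pooling rule below are assumptions, not the published SSTAM definition.

```python
# Hedged sketch: combine six PEA strength maps into one score, emphasizing
# artifacts that fall on visually salient regions.
import numpy as np

def sstam_like_score(artifact_maps, saliency, weights=None):
    """artifact_maps: dict name -> (T, H, W) strength maps in [0, 1]
    (e.g. blurring, blocking, bleeding, ringing, flickering, floating);
    saliency: (T, H, W) map in [0, 1]. Higher score = worse quality."""
    weights = weights or {k: 1.0 for k in artifact_maps}
    total = 0.0
    for name, amap in artifact_maps.items():
        # Saliency-weighted spatio-temporal pooling of one artifact type.
        total += weights[name] * float(
            (amap * saliency).sum() / (saliency.sum() + 1e-8))
    return total / sum(weights.values())
```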
Image virtual try-on aims at replacing the cloth on a person image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to an unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose prior, textures of various complexity are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to synthesize the final try-on image and to learn de-occlusion jointly. In comparison to the state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
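The following is a hedged sketch of the general copy-and-paste mechanism such a semantically-guided mixup implies (region choices, the blend weight, and the function name are illustrative assumptions, not the authors' module): donor texture is blended onto the try-on image only where a chosen semantic part mask is active.

```python
# Hedged sketch: simulate an occlusion by pasting donor texture into a
# semantic part region of the try-on image.
import numpy as np

def semantic_mixup(tryon_img, donor_img, part_mask, alpha=0.8):
    """tryon_img, donor_img: (H, W, 3) float arrays; part_mask: (H, W)
    binary mask of a human part (e.g. an arm or former-cloth region)."""
    m = part_mask[..., None].astype(tryon_img.dtype)
    # Copy-and-paste blend: occluder texture replaces the masked region.
    return (1 - m) * tryon_img + m * (alpha * donor_img + (1 - alpha) * tryon_img)
```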
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions, without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features and the thing/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++, which uses a new part-whole interaction scheme built on masked cross attention to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
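As a hedged illustration of a part-whole masked cross-attention step in the spirit of Mask2Former-style decoding (the layer below is a sketch, not the authors' exact design): part queries attend to pixel features, but only inside regions that the corresponding whole-object (thing/stuff) masks permit.

```python
# Hedged sketch: restrict each part query's cross attention to the spatial
# footprint of its whole object via a boolean attention mask.
import torch
import torch.nn as nn

class PartWholeMaskedCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, part_queries, pixel_feats, whole_masks):
        """part_queries: (B, Q, C); pixel_feats: (B, HW, C);
        whole_masks: (B, Q, HW) booleans, True where attention is blocked
        (each query should keep at least one visible position)."""
        out, _ = self.attn(part_queries, pixel_feats, pixel_feats,
                           attn_mask=whole_masks.repeat_interleave(
                               self.attn.num_heads, dim=0))
        return out
```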
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and show that MSTAT achieves state-of-the-art accuracy on various standard benchmarks.
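A hedged sketch of temporal patch shuffling as a robustness augmentation (an illustration of the idea named in the abstract; the granularity and probability are assumptions): for randomly chosen patch positions, the frame order of that patch's tokens is permuted within the clip.

```python
# Hedged sketch: shuffle the temporal order of a random subset of patch
# columns so the model cannot rely on exact frame ordering.
import torch

def temporal_patch_shuffle(tokens, p=0.1):
    """tokens: (B, T, N, C) video tokens (T frames, N patches per frame)."""
    B, T, N, C = tokens.shape
    out = tokens.clone()
    shuffle_patch = torch.rand(B, N) < p          # which patch columns to shuffle
    for b in range(B):
        for n in torch.nonzero(shuffle_patch[b]).flatten():
            out[b, :, n] = tokens[b, torch.randperm(T), n]
    return out
```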